🚨 Implement gradient checkpointing in GPTBigCode #41818
Conversation
Support for gradient checkpointing was lost in the major refactoring in PR huggingface#38635 and this is the attempt to re-add it.

I extended the tests to:
- test `use_reentrant=True` and `False`
- make sure `model.train` is called so that gradient checkpointing works; this is a limitation of the tests currently used by GPTBigCode
- make sure that one (the first) gradient checkpointing layer is called
- make sure that the same non-zero grads are there for normal and checkpointing runs; this is something we tripped over before in PEFT due to the possibly incompletely stored runtime environment in the checkpointed forward step, see also peft#2826

Note that the invocation of `GPTBigCodeBlock.forward` has changed:
- `layer_past` is now passed as a keyword argument so that `GradientCheckpointingLayer.__call__` can see and filter this parameter (`use_reentrant=False` fails otherwise)
- `{encoder_}hidden_states` are still passed as positional arguments so that `torch.utils.checkpoint.checkpoint` receives them as pos. args and computes gradients for these (kwargs would be filtered by `GradientCheckpointingLayer`); see the sketch below
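For illustration only (this is not code from the PR, and the shapes and function names are made up), a minimal sketch of the second point: tensors that need gradients have to reach `torch.utils.checkpoint.checkpoint` as positional arguments, while cache-like state is kept out of the checkpointed call.

```python
import torch
from torch.utils.checkpoint import checkpoint


def block_forward(hidden_states, encoder_hidden_states):
    # Stand-in for a transformer block; both inputs participate in the graph.
    return hidden_states * 2 + encoder_hidden_states


hidden_states = torch.randn(2, 4, requires_grad=True)
encoder_hidden_states = torch.randn(2, 4, requires_grad=True)

# Positional tensor arguments become inputs of the checkpointed call, so their
# gradients are recomputed and populated during backward().
out = checkpoint(block_forward, hidden_states, encoder_hidden_states, use_reentrant=False)
out.sum().backward()
print(hidden_states.grad is not None, encoder_hidden_states.grad is not None)  # True True
```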
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
The tests are neat, I think we should move them to common tests tho. Not exactly sure why it was specially treated here.
And I guess there will be a need for another round to check similar models that may have been accidentally overridden with the ckpting layer 😓 not necessarily this PR tho
```
encoder_hidden_states: Optional[torch.Tensor] = None,
layer_past: Optional[Cache] = None,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
```
Let's not change the order here, we could break things for users. Rather change the args/kwargs positions on the module call if necessary.
I'm not sure that this is possible. It is mandatory that we pass `layer_past` as a keyword argument, otherwise `GradientCheckpointingLayer` will not be able to remove it from the kwargs in case of gradient checkpointing. On the other hand, every input that may require gradients (`hidden_states`, `encoder_hidden_states`) must be passed as a positional argument for `checkpoint()` to work. Maybe I'm missing something but I don't think we can bring those together without moving `encoder_hidden_states` up in the list.
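As a purely illustrative sketch of that constraint (this is not the actual `GradientCheckpointingLayer` implementation; the class name, the hard-coded `use_reentrant=False`, and the kwarg handling below are simplified assumptions), the wrapper pattern looks roughly like this: kwargs are bound outside the checkpointed call and the cache is dropped, while the positional tensors become the checkpoint inputs that receive gradients.

```python
from functools import partial

from torch import nn
from torch.utils.checkpoint import checkpoint


class CheckpointingLayerSketch(nn.Module):
    """Simplified stand-in for the GradientCheckpointingLayer idea (base class for blocks)."""

    gradient_checkpointing = False  # toggled externally, e.g. by gradient_checkpointing_enable()

    def __call__(self, *args, **kwargs):
        if self.gradient_checkpointing and self.training:
            # The cache cannot be replayed safely inside the re-computed forward
            # pass, so it is filtered out of the checkpointed call.
            kwargs.pop("layer_past", None)
            # Binding the remaining kwargs via partial means only the positional
            # tensors (hidden_states, encoder_hidden_states, ...) are seen by
            # checkpoint() and therefore get gradients.
            return checkpoint(partial(super().__call__, **kwargs), *args, use_reentrant=False)
        return super().__call__(*args, **kwargs)
```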
I mean that the signature should stay the same, e.g. see
transformers/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py
Lines 586 to 596 in 84d19be
```python
def forward(
    self,
    hidden_states: Optional[tuple[torch.Tensor]],
    layer_past: Optional[torch.Tensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    head_mask: Optional[torch.Tensor] = None,
    encoder_hidden_states: Optional[torch.Tensor] = None,
    encoder_attention_mask: Optional[torch.Tensor] = None,
    use_cache: Optional[bool] = False,
    output_attentions: Optional[bool] = False,
    **kwargs,
```
The calls from the module above will then need to be adjusted, like
transformers/src/transformers/models/gpt_bigcode/modeling_gpt_bigcode.py
Lines 901 to 910 in 84d19be
```python
outputs = block(
    hidden_states,
    layer_past,
    attention_mask,
    head_mask[i],
    encoder_hidden_states,  # as a positional argument for gradient checkpointing
    encoder_attention_mask=encoder_attention_mask,
    use_cache=use_cache,
    output_attentions=output_attentions,
)
```
Changing the signature is breaking a bit too much!
For visibility: as discussed internally, we need this to be breaking.
cc @ArthurZucker since this might become a bit more breaking than initially thought, and it likely affects more models
- Compare that the non-zero gradients in a reference run are present in the checkpointing run
- Make sure that the forward of at least one gradient checkpointing layer is actually called more than once (as expected during gradient checkpointing backward)

Currently there are some problems with Bert-derived MultipleChoice models: when dropout is enabled there are scenarios during gradient checkpointing where `classifier.bias.grad` is None. I don't yet have a good explanation for this; disabling dropout resolves it. I would have understood it if it were dropout on the classification layer, but enabling attention dropout also leads to this behavior.

MoE models have selective sparsity depending on the selected experts; for this reason we only compare gradients on parameters collected during the reference backward run (see the sketch below).
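Not the exact test code from this PR; just a minimal sketch of the comparison approach, assuming `inputs` contains labels so that the model output exposes `.loss`:

```python
import torch


def collect_nonzero_grads(model):
    # Gradients that are present and non-zero after a backward pass.
    return {
        name: param.grad.detach().clone()
        for name, param in model.named_parameters()
        if param.grad is not None and param.grad.abs().sum() > 0
    }


def compare_checkpointing_grads(model, inputs):
    model.train()

    # Reference run without gradient checkpointing.
    model.zero_grad()
    model(**inputs).loss.backward()
    reference_grads = collect_nonzero_grads(model)

    # Run again with gradient checkpointing enabled.
    model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={"use_reentrant": False})
    model.zero_grad()
    model(**inputs).loss.backward()
    checkpointed_grads = collect_nonzero_grads(model)

    # Only parameters seen in the reference run are compared; MoE models may not
    # route through every expert, so some params legitimately have no grad.
    for name, ref_grad in reference_grads.items():
        assert name in checkpointed_grads, f"missing grad for {name} under checkpointing"
        torch.testing.assert_close(checkpointed_grads[name], ref_grad)
```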
I've updated the general tests. From the commit message: currently these models are expected to fail since they're not implementing `GradientCheckpointingLayer`:

most likely these as well:

As I explained in the commit message, there's a strange bug with Bert-derived models when testing the MultipleChoice classes. I didn't revert the GPTBigCode test changes yet since I first wanted to get an opinion on whether we want to proceed with these more general tests or not.
I think this is fine, we will likely need to break other models' signatures as well, i.e. not only GPTBigCode. This PR will get bigger than initially thought, but let's fix these models.
We can allow this for v5 but let's also mention this PR in the v5 thread (#40822) when we merge this.
tests/test_modeling_common.py
```python
# TODO I don't understand why attention_probs_dropout_prob influences classifier.bias in
# BertForMultipleChoice (and other Bert derived models). Sometimes classifier.bias is None
# when attention_probs_dropout_prob > 0. This might indicate a bug somewhere.
if hasattr(config, "hidden_dropout_prob"):
    config.hidden_dropout_prob = 0.0
if hasattr(config, "attention_probs_dropout_prob"):
    config.attention_probs_dropout_prob = 0.0
```
Is this only for the multiple choice class, or are other model types also affected?
I don't think these have high usage either way, so it's fine if we leave an explanation here.
> Is this only for the multiple choice class, or are other model types also affected?
It didn't seem to make a difference for other models with the limited test runs I made. If you want I can limit it to the problematic model class.
I think it's fine with the comment, but it would be nice if you could double check whether it may affect other model classes.
```python
# Gradient checkpointing is implemented via GradientCheckpointingLayer, if none is present this is likely
# an implementation issue. Note we exclude xlstm and zamba* for now since they are still not using
# GradientCheckpointingLayer.
if config.model_type not in [
    "xlstm",
    "zamba",
    "zamba2",
    "swiftformer",
    "janus_vqgan",
    "clvp_encoder",
    "clvp_decoder",
]:
    self.assertTrue([m for m in model.modules() if isinstance(m, GradientCheckpointingLayer)])
```
As discussed, let's fix these within this PR directly. It seems like these were more unintentional regressions due to other big PRs.
Also drop janus from the ignore list - only the VQVAE case is without gradient checkpointing and it is doubtful that it is useful in that case. Training with gradient checkpointing is not tested anyway.
Just noticed this small thing in xlstm
Re: Clvp, let's isolate it for now. We can come back later unless you have a good idea how to refactor/handle this properly.
The implementation of GradientCheckpointingLayers is not trivial and may break behavior that was previously expected. Therefore we keep it as-is for now.
[For maintainers] Suggested jobs to run (before merge): run-slow: gpt_bigcode, swiftformer, xlstm, zamba, zamba2
🚨 Note that this is breaking compatibility by changing the forward signature in `GPTBigCodeBlock.forward`!